Video game firms found to have broken own UK industry rules on loot boxes

The Guardian

The UK government's decision to let technology companies self-regulate gambling-style loot boxes in video games has been called into question, after some of the developers put in charge of new industry guidelines broke their own rules. In the past six months, the advertising regulator has upheld complaints against three companies involved in drawing up industry rules, including the leading developer Electronic Arts (EA), for failing to disclose that their games contained loot boxes. An expert who submitted the complaints said he had found hundreds more examples of breaches but had only taken a handful to the Advertising Standards Authority (ASA) in order to highlight the problem. Loot boxes are in-game features that allow players to pay, with real money or virtual currency, to open a digital envelope containing random prizes, such as an outfit or a weapon for a character. Despite warnings from experts that loot boxes carry similar risks to gambling, the then Department for Digital, Culture, Media and Sport said in July 2022 it would not follow other countries, such as Belgium, in choosing to regulate them as gambling products.


House Democrats launch 'working group' on artificial intelligence

FOX News

House Democrats are launching a working group aimed at crafting artificial intelligence policy, the latest attempt by federal lawmakers to wrap their heads around legislating the rapidly advancing sector. The New Democrat Coalition, a group of nearly 100 House Democrats that touts itself as "pragmatic," unveiled the new initiative this week. Rep. Don Beyer, D-Va., one of the initiative's vice chairs, told Fox News Digital he hopes the working group will "help develop real, practicable ideas that will put guardrails in place for AI." "I continue to be focused on a variety of areas related to AI, including safety and security, transparency, the future of work, preventing civil rights abuses, health care and suicide prevention, and more, and have discussions ongoing about legislation in these areas with members of both parties," Beyer said. "Congress has to get up to speed on this issue, and I think the New Dems' AI working group will be a constructive setting for progress." The Biden administration and Congress are examining how to regulate AI. Working group Chair Rep. Derek Kilmer, D-Wash., suggested it could lay the groundwork for an AI regulatory framework in the House of Representatives. "We are already seeing how breakthroughs in this emerging technology present both great opportunities and challenges with potential disruptions for workers, for democracy, and for national security," Kilmer said. "As AI's applications expand and change, it is incumbent on lawmakers to address its unique opportunities and challenges by creating a regulatory framework that both encourages growth while guarding against potential risks." Rep. Seth Moulton, D-Mass., another member of the working group and a Marine veteran, said he was concerned with how AI would "transform warfare" and called on Congress to put up responsible guardrails against the technology's most devastating possibilities. "It's going to be impossible for Congress to really stay ahead of AI, but what we can and should do is to take very seriously AI's most dangerous use cases and develop solutions and safeguards that apply directly to those cases," Moulton told Fox News Digital. "I'm also particularly concerned about how AI will transform warfare."


Practical Machine Learning in R: Nwanganga, Fred, Chapple, Mike + Free Shipping

#artificialintelligence

Mike Chapple is Teaching Professor of IT, Analytics, and Operations at the University of Notre Dame's Mendoza College of Business where he teaches graduate and undergraduate courses in cybersecurity and business analytics. Prior to joining Notre Dame's faculty, Mike served as Senior Director for IT Service Delivery at the University. In this role, he oversaw the information security, IT compliance, cloud computing, data governance, IT architecture, learning platforms, project management, strategic planning and product management functions for the Office of Information Technologies. Mike led Notre Dame's Cloud First strategy which moved 80% of the institution's IT services into the cloud over three years. Mike previously served as Senior Advisor to the Executive Vice President at Notre Dame for two years.


Setting the standard for AI in dermatology - AIMed

#artificialintelligence

Dr. Rubeta Matin, NHS Consultant Dermatologist, reveals the challenges of setting up a new national skin database to support the development of dermatological AI in the UK. It's common knowledge that the chances of survival increase dramatically if melanoma is detected and treated early. However, many algorithm-based applications that claim to identify potentially dangerous-looking pigmented lesions on the skin have not been formally and appropriately validated in intervention studies. There are also few systematic and rigorous reviews establishing the true accuracy of these skin cancer-diagnosing algorithms, especially those tested in an artificial research setting that may not be representative of the real world. For reasons like these, dermatologists question whether the false assurance given by these applications may delay individuals from seeking medical advice. Last February, a new study published in the BMJ revealed that mobile applications that assess the risk of suspicious moles may not be reliable enough to detect all forms of skin cancer.


UC creates recommendations for responsible use of artificial intelligence

#artificialintelligence

The University of California has created recommendations charting a path toward the responsible use of artificial intelligence in future UC endeavors. UC's increasing dependence on AI has increased its overall productivity as an institution, according to the UC Office of the President, or UCOP. However, with the implementation of AI, there is also potential for problems to arise. To address this, former UC President Janet Napolitano and current president Michael Drake created the Presidential Working Group on Artificial Intelligence, or the Working Group, in August 2020. The Working Group's final report noted that the group consists of 32 faculty and staff from all 10 UC campuses, plus additional representatives from UC Legal and the Office of Ethics, Compliance and Audit Services, among other groups.


ENISA AI Threat Landscape Report Unveils Major Cybersecurity Challenges

#artificialintelligence

Today, the European Union Agency for Cybersecurity (ENISA) released its Artificial Intelligence Threat Landscape Report, unveiling the major cybersecurity challenges facing the AI ecosystem. ENISA's study takes a methodological approach to mapping the key players and threats in AI. The report follows up on the priorities defined in the European Commission's 2020 AI White Paper. The ENISA Ad-Hoc Working Group on Artificial Intelligence Cybersecurity, with members from EU institutions, academia and industry, provided input and supported the drafting of this report. The benefits of this emerging technology are significant, but so are the concerns, such as potential new avenues of manipulation and attack methods.


IAB AI Working Group to Establish Artificial Intelligence Standards

#artificialintelligence

The Interactive Advertising Bureau (IAB), the national trade association for the digital media and marketing industries, is focusing its AI Standards Working Group on developing artificial intelligence (AI) standards, best practices, use cases, and terminology in an effort to scale AI and enable the industry to realize its full potential. The group is newly co-chaired by IBM Watson Advertising and Nielsen. The first release of 2021, "Artificial Intelligence Use Cases and Best Practices for Marketing," will help executive leaders, marketers, and technologists get the most from AI, and do so responsibly. Created for those already working with AI or looking to leverage it in their business, the guide draws directly from the real-world experience of co-chairs IBM Watson Advertising and Nielsen, as well as top publishers, agencies, and ad tech companies in the industry. It is not an ivory tower overview of AI: it is a specific, practical guide for executives who are in the thick of it.


A Decentralized Approach Towards Responsible AI in Social Ecosystems

Chu, Wenjing

arXiv.org Artificial Intelligence

For AI technology to fulfill its full promises, we must design effective mechanisms into the AI systems to support responsible AI behavior and curtail potential irresponsible use, e.g. in areas of privacy protection, human autonomy, robustness, and prevention of biases and discrimination in automated decision making. In this paper, we present a framework that provides computational facilities for parties in a social ecosystem to produce the desired responsible AI behaviors. To achieve this goal, we analyze AI systems at the architecture level and propose two decentralized cryptographic mechanisms for an AI system architecture: (1) using Autonomous Identity to empower human users, and (2) automating rules and adopting conventions within social institutions. We then propose a decentralized approach and outline the key concepts and mechanisms based on Decentralized Identifier (DID) and Verifiable Credentials (VC) for a general-purpose computational infrastructure to realize these mechanisms. We argue the case that a decentralized approach is the most promising path towards Responsible AI from both the computer science and social science perspectives.
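As a rough illustration of the second mechanism the abstract names, the sketch below issues and verifies a minimal Verifiable Credential (VC) whose subject is identified by a Decentralized Identifier (DID). The field names loosely follow the W3C VC data model; the HMAC "proof" and the `ISSUER_SECRET` key are simplified stand-ins (assumptions, not the paper's construction) for a real DID-resolved public-key signature.

```python
import hashlib
import hmac
import json

# Stand-in for a DID-bound issuer keypair; real VCs use asymmetric
# signatures whose public key is resolved from the issuer's DID document.
ISSUER_SECRET = b"issuer-private-key"


def issue_credential(issuer_did, subject_did, claims):
    """Build a minimal VC-like document and attach an integrity proof."""
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": issuer_did,
        "credentialSubject": {"id": subject_did, **claims},
    }
    # Canonicalize via sorted-key JSON so signing and verifying agree.
    payload = json.dumps(credential, sort_keys=True).encode()
    credential["proof"] = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return credential


def verify_credential(credential):
    """Recompute the proof over everything except the proof itself."""
    body = {k: v for k, v in credential.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])


vc = issue_credential(
    "did:example:university",
    "did:example:alice",
    {"aiAuditorCertified": True},
)
assert verify_credential(vc)
```

Because the proof covers the canonicalized credential body, any tampering with the claims (for example, flipping `aiAuditorCertified`) makes verification fail, which is the property that lets rules and conventions be checked automatically by parties in the ecosystem.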


VA's AI 'to-go' delivery model is morphing into a platform - FedScoop

#artificialintelligence

Interest in an artificial intelligence "to-go" delivery model is building with more than a dozen Department of Veterans Affairs sites looking to pilot modules, the agency's head of AI said Thursday. VA developed the initial module to assist its medical centers with COVID-19 individual risk prediction, but its hundreds of centers and thousands of facilities have other uses for the statistical models being tested, said Gil Alterovitz, VA's director of AI. Additional use cases haven't been chosen, but AI models will be packaged as embeddable software add-ons for rapid deployment based on the original. "We're now using that to generalize and essentially created a new platform so that artificial intelligence research and development can be added as modules in the future," Alterovitz said during day three of FedTalks, presented by FedScoop. Once the AI technology and health application have been vetted, any medical center will be able to access a module when VA shares a secure, internal weblink.


FSC Korea Kicks Off Working Group on Artificial Intelligence

#artificialintelligence

The working group will seek to promote the adoption of AI in financial services as part of the government's 'New Deal' policy initiative. South Korea's Financial Services Commission (FSC) on Thursday (16 July) held a kick-off meeting of a new working group tasked with promoting the use of artificial intelligence (AI) technology in financial services. "AI technology can help improve the effectiveness, inclusiveness and accountability while lowering costs in providing financial services," the FSC said, pointing to credit scoring, loan assessment, insurance and asset management as service areas that will benefit. The move to promote AI adoption in financial services is part of the government's KRW 114 trillion 'New Deal' policy initiative, which centres on creating more tech-sector jobs and promoting the digitalisation of industries as a new post-coronavirus growth engine. At the meeting, participants discussed major trends and policies surrounding the application of AI in financial services, as well as the working group's plans to promote the sector.